

Water flow in prairie watersheds is increasingly unpredictable -- but AI could help

AIHub

In recent years, the Prairies have seen bigger swings in climate conditions -- very wet years followed by very dry ones. That makes an already unpredictable landscape even harder to forecast, with real consequences for flood preparedness and water quality. The challenge is the landscape itself. Much of the Canadian Prairies sits within the Prairie Pothole Region, a landscape dotted with millions of shallow wetlands and depressions. Water doesn't simply run downhill into a stream; it is stored first.


Appendix

Neural Information Processing Systems

The polytope P(X, L) is in fact a "twisted sum" of a finite number of lattice polytopes fibering over P(F, L|_F).



We asked teachers about their experiences with AI in the classroom -- here's what they said

AIHub

Since ChatGPT and other large language models burst into public consciousness, school boards are drafting policies, universities are hosting symposiums and tech companies are relentlessly promoting their latest AI-powered learning tools. In the race to modernize education, artificial intelligence (AI) has become the new darling of policy innovation. While AI promises efficiency and personalization, it also introduces complexity, ethical dilemmas and new demands. Teachers, who are at the heart of learning along with students, are watching this transformation with growing unease. For example, according to the Alberta Teachers' Association, 80 to 90 per cent of educators surveyed expressed concern about AI's potential negative effects on education.


What is the chance your plane will be hit by space debris?

MIT Technology Review

In mid-October, a mysterious object cracked the windshield of a packed Boeing 737 cruising at 36,000 feet above Utah, forcing the pilots into an emergency landing. The internet was suddenly buzzing with the prospect that the plane had been hit by a piece of space debris. We still don't know exactly what hit the plane--likely a remnant of a weather balloon--but it turns out the speculation online wasn't that far-fetched. That's because while the risk of flights being hit by space junk is still small, it is, in fact, growing.


Spatio-Temporal Graph Convolutional Networks for EV Charging Demand Forecasting Using Real-World Multi-Modal Data Integration

Tupayachi, Jose, Camur, Mustafa C., Heaslip, Kevin, Li, Xueping

arXiv.org Artificial Intelligence

Transportation remains a major contributor to greenhouse gas emissions, highlighting the urgency of transitioning toward sustainable alternatives such as Electric Vehicles (EVs). Yet, uneven spatial distribution and irregular utilization of charging infrastructure create challenges for both power grid stability and investment planning. This study introduces Traffic-Weather Graph Convolutional Network (TW-GCN), a spatio-temporal forecasting framework that combines Graph Convolutional Networks with temporal architectures to predict EV charging demand in Tennessee, United States. We utilize real-world traffic flows, weather conditions, and proprietary data provided by one of the largest U.S.-based EV infrastructure companies to capture both spatial dependencies and temporal dynamics. Extensive experiments across varying forecasting horizons, clustering strategies, and sequence lengths reveal that mid-horizon (3-hour) forecasts achieve the best balance between responsiveness and stability, with one-dimensional convolutional neural networks consistently outperforming other temporal models. Regional analysis shows disparities in predictive accuracy across East, Middle, and West Tennessee, reflecting how station density, Points of Interest, and local demand variability shape model capabilities. The proposed TW-GCN framework advances the integration of data-driven intelligence into EV infrastructure planning while supporting sustainable mobility transitions.
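The abstract does not give TW-GCN's architecture in detail, but the two halves it names -- a graph convolution over the station network and a 1-D convolution over the demand history -- can be sketched minimally with NumPy. All shapes, weights, and the toy adjacency below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step: symmetrically normalize the adjacency
    matrix (with self-loops) and propagate station features through it."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU

def temporal_conv1d(series, kernel):
    """Valid 1-D convolution over a demand time series -- the temporal
    half of a spatio-temporal model like TW-GCN."""
    k = len(kernel)
    return np.array([series[i:i + k] @ kernel
                     for i in range(len(series) - k + 1)])

# Toy example: 3 charging stations in a line, 2 input features each.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])
weight = np.ones((2, 2))
spatial = gcn_layer(adj, feats, weight)        # (3, 2) smoothed station features
demand = np.array([1., 2., 3., 4., 5., 6.])    # hourly demand at one station
smoothed = temporal_conv1d(demand, np.array([0.25, 0.5, 0.25]))
print(spatial.shape, smoothed)
```

In a full model the spatial and temporal blocks would be stacked and trained end-to-end; here they only show how the two kinds of structure enter the computation.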


TABLET: A Large-Scale Dataset for Robust Visual Table Understanding

Alonso, Iñigo, Miranda, Imanol, Agirre, Eneko, Lapata, Mirella

arXiv.org Artificial Intelligence

While table understanding increasingly relies on pixel-only settings where tables are processed as visual representations, current benchmarks predominantly use synthetic renderings that lack the complexity and visual diversity of real-world tables. Additionally, existing visual table understanding (VTU) datasets offer fixed examples with single visualizations and pre-defined instructions, providing no access to underlying serialized data for reformulation. TABLET addresses these gaps: each example includes paired image-HTML representations, comprehensive metadata, and provenance information linking back to the source datasets. The field of table understanding focuses on techniques for representing and interpreting tabular data to support a wide range of practical tasks such as question answering, summarization, and information extraction. Research in this area has traditionally represented tables as structured text, encoding their content and layout through linearized or graph-based representations (see Figure 1b; Herzig et al. 2020; Zhang et al. 2020; Liu et al. 2022). While this unimodal view remains effective in certain domains, many tables found in documents and webpages contain irregular structures, rely on visual formatting (e.g., merged cells, background colors, font variations), or embed multimodal elements such as images (see Figure 1a). Advances in Vision-Language Models (VLMs; Radford et al. 2021; Liu et al. 2023) have provided impetus for treating tables as images, eschewing the step of rendering them as text sequences (like Markdown or HTML). The conceptual simplicity of this approach, coupled with improved performance on several tabular tasks (Alonso et al., 2024; Zhou et al., 2025), has driven significant research interest (Zheng et al., 2024b; Su et al., 2024; Jiang et al., 2025) in Visual Table Understanding (also known as Multimodal Table Understanding).
Visual representations of tables are not merely convenient but in many cases necessary, particularly for VLM agents that interact with the world exclusively through pixels (e.g., on a screen) and must interpret tables directly in their visual form (Deng et al., 2023; Zheng et al., 2024a; Lu et al., 2024). Despite the growing relevance of VTU, there are few resources that support training models directly on image-based representations of tables. Existing benchmarks like MMTab (Zheng et al., 2024b) consist of web tables (e.g., from Wikipedia, which is a common source for many tabular datasets) that are serialized and subsequently rendered as synthetic images (see Figures 1b,c). As a result, models trained on such data face a train-test mismatch: the visual patterns learned from serialized renderings do not generalize well to naturally occurring tables, failing to capture critical visual cues like subtle ruling lines, intricate merged cell layouts, background colors, font variations, or embedded images that are inherent to real-world table comprehension (compare Figure 1a and 1c).
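The abstract names the ingredients of a TABLET example (paired image-HTML representations, metadata, provenance) without giving the schema. A plausible record shape, with every field name and value being an illustrative assumption rather than the dataset's actual format, might look like:

```python
# Hypothetical shape of one TABLET example. Field names and values are
# illustrative guesses based only on the abstract (paired image-HTML,
# metadata, provenance back to a source dataset), not the real schema.
example = {
    "table_id": "tab_00001",
    "image": "tab_00001.png",        # visual rendering of the table
    "html": "<table><tr><td>Q1</td><td>Revenue</td></tr></table>",  # paired serialization
    "metadata": {
        "n_rows": 4,
        "n_cols": 3,
        "has_merged_cells": True,    # the visual cue synthetic renders often miss
    },
    "provenance": {"source_dataset": "a source corpus (hypothetical)"},
    "instruction": "What value appears in the last row?",
}
print(sorted(example))
```

Keeping the serialized HTML alongside the image is what would allow the reformulation the abstract says existing VTU datasets lack.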


ROBoto2: An Interactive System and Dataset for LLM-assisted Clinical Trial Risk of Bias Assessment

Hevia, Anthony, Chintalapati, Sanjana, Lai, Veronica Ka Wai, Nguyen, Thanh Tam, Wong, Wai-Tat, Klassen, Terry, Wang, Lucy Lu

arXiv.org Artificial Intelligence

We present ROBOTO2, an open-source, web-based platform for large language model (LLM)-assisted risk of bias (ROB) assessment of clinical trials. ROBOTO2 streamlines the traditionally labor-intensive ROB v2 (ROB2) annotation process via an interactive interface that combines PDF parsing, retrieval-augmented LLM prompting, and human-in-the-loop review. Users can upload clinical trial reports, receive preliminary answers and supporting evidence for ROB2 signaling questions, and provide real-time feedback or corrections to system suggestions. ROBOTO2 is publicly available at https://roboto2.vercel.app/, with code and data released to foster reproducibility and adoption. We construct and release a dataset of 521 pediatric clinical trial reports (8954 signaling questions with 1202 evidence passages), annotated using both manual and LLM-assisted methods, serving as a benchmark and enabling future research. Using this dataset, we benchmark ROB2 performance for 4 LLMs and provide an analysis of current model capabilities and ongoing challenges in automating this critical aspect of systematic review.
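The abstract names a retrieval-augmented prompting step but not how retrieval works. As a rough stand-in, here is a stdlib-only TF-IDF sketch of the general pattern: rank parsed passages against a signaling question, then place the top passage in the prompt. The scoring method, passages, and prompt wording are all assumptions, not ROBOTO2's actual implementation.

```python
import math
from collections import Counter

def tf_idf_vectors(passages):
    """Bag-of-words TF-IDF vectors for a small passage collection."""
    docs = [Counter(p.lower().split()) for p in passages]
    n = len(docs)
    df = Counter(w for d in docs for w in d)          # document frequency
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}
    return [{w: c * idf[w] for w, c in d.items()} for d in docs], idf

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(question, passages, k=1):
    """Return the k passages most similar to a ROB2 signaling question."""
    vecs, idf = tf_idf_vectors(passages)
    q = Counter(question.lower().split())
    q_vec = {w: c * idf.get(w, 1.0) for w, c in q.items()}
    scored = sorted(zip(passages, (cosine(q_vec, v) for v in vecs)),
                    key=lambda t: -t[1])
    return [p for p, _ in scored[:k]]

# Toy passages standing in for parsed PDF text (invented for illustration).
passages = [
    "participants were randomised using a computer generated sequence",
    "adverse events were recorded at each follow up visit",
]
question = "was the allocation sequence randomised"
evidence = retrieve(question, passages)
prompt = (f"Signaling question: {question}\n"
          f"Evidence: {evidence[0]}\n"
          "Answer yes, probably yes, probably no, no, or no information.")
print(evidence[0])
```

A production system would likely use dense embeddings rather than TF-IDF, but the retrieve-then-prompt shape is the same, and the human-in-the-loop review then corrects the LLM's answer to each signaling question.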


Community Detection on Model Explanation Graphs for Explainable AI

Moradi, Ehsan

arXiv.org Artificial Intelligence

Feature-attribution methods (e.g., SHAP, LIME) explain individual predictions but often miss higher-order structure: sets of features that act in concert. We propose Modules of Influence (MoI), a framework that (i) constructs a model explanation graph from per-instance attributions, (ii) applies community detection to find feature modules that jointly affect predictions, and (iii) quantifies how these modules relate to bias, redundancy, and causality patterns. Across synthetic and real datasets, MoI uncovers correlated feature groups, improves model debugging via module-level ablations, and localizes bias exposure to specific modules. We release stability and synergy metrics, a reference implementation, and evaluation protocols to benchmark module discovery in XAI.
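The MoI pipeline in the abstract has three steps: build an explanation graph from per-instance attributions, detect communities, and analyze the resulting modules. A minimal sketch of the first two steps, using correlation of attribution magnitudes as the edge weight and connected components as a crude stand-in for community detection (the paper's actual graph construction and algorithm may differ):

```python
import numpy as np

def attribution_modules(attr, threshold=0.7):
    """Group features whose per-instance attributions co-vary.

    attr: (n_instances, n_features) matrix of e.g. SHAP values.
    Edges connect feature pairs whose |attribution| correlation exceeds
    `threshold`; connected components stand in for communities here.
    """
    corr = np.corrcoef(np.abs(attr), rowvar=False)
    n = corr.shape[0]
    adj = (np.abs(corr) >= threshold) & ~np.eye(n, dtype=bool)
    # Union-find over the thresholded co-attribution graph.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                parent[find(i)] = find(j)
    modules = {}
    for i in range(n):
        modules.setdefault(find(i), []).append(i)
    return sorted(modules.values())

# Toy attributions: features 0 and 1 act in concert, feature 2 is independent.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
attr = np.column_stack([base,
                        base + 0.01 * rng.normal(size=100),
                        rng.normal(size=100)])
print(attribution_modules(attr))
```

A module-level ablation, as in the abstract, would then zero out or permute all features in one recovered module at once and measure the change in model performance.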


Symbolically Scaffolded Play: Designing Role-Sensitive Prompts for Generative NPC Dialogue

Figueiredo, Vanessa, Elumeze, David

arXiv.org Artificial Intelligence

Large Language Models (LLMs) promise to transform interactive games by enabling non-player characters (NPCs) to sustain unscripted dialogue. Yet it remains unclear whether constrained prompts actually improve player experience. We investigate this question through The Interview, a voice-based detective game powered by GPT-4o. A within-subjects usability study (N=10) compared high-constraint (HCP) and low-constraint (LCP) prompts, revealing no reliable experiential differences beyond sensitivity to technical breakdowns. Guided by these findings, we redesigned the HCP into a hybrid JSON+RAG scaffold and conducted a synthetic evaluation with an LLM judge, positioned as an early-stage complement to usability testing. Results uncovered a novel pattern: scaffolding effects were role-dependent. The Interviewer (quest-giver NPC) gained stability, while suspect NPCs lost improvisational believability. These findings overturn the assumption that tighter constraints inherently enhance play. Extending fuzzy-symbolic scaffolding, we introduce Symbolically Scaffolded Play, a framework in which symbolic structures are expressed as fuzzy, numerical boundaries that stabilize coherence where needed while preserving improvisation where surprise sustains engagement.
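The abstract's "JSON+RAG scaffold" with "fuzzy, numerical boundaries" suggests a role-conditioned prompt configuration. The sketch below is purely illustrative: every field name, value, and rule is an assumption, not the paper's schema; it only shows how constraint tightness could be made role-dependent, per the role-dependent finding.

```python
import json

def scaffold(role):
    """Hypothetical role-sensitive JSON scaffold: the quest-giver gets a
    tight, stabilizing constraint; suspects keep a loose fuzzy boundary
    so improvisational believability is preserved."""
    tight = role == "interviewer"
    return {
        "role": role,
        "constraint_level": 0.9 if tight else 0.4,   # fuzzy boundary in [0, 1]
        "retrieval": {"top_k": 3, "source": "case_notes"},  # RAG context slot
        "rules": (["stay on the quest script", "never reveal the culprit"]
                  if tight else ["improvise freely within your backstory"]),
    }

interviewer = scaffold("interviewer")
suspect = scaffold("suspect")
print(json.dumps(interviewer, indent=2))
```

Serializing the scaffold to JSON and prepending it (with retrieved case notes) to the NPC's system prompt is one plausible way the hybrid design could be wired into a GPT-4o call.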